Search Results: "Bernhard R. Link"

4 April 2012

Bernhard R. Link: The wonders of debian/rules build-arch

It has taken a decade to get there, but finally the buildds are able to call debian/rules build-arch. Compare the unfinished old build
 Finished at 20120228-0753
 Build needed 22:25:00, 35528k disc space
with the new one on the same architecture finally only building what is needed
 Finished at 20120404-0615
 Build needed 00:11:28, 27604k disc space
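What makes the difference is a debian/rules that implements separate build-arch and build-indep targets, so the buildds no longer have to build documentation and other architecture-independent material. A minimal sketch of the idea (the target bodies are invented for illustration; real packages differ):

```make
# Hypothetical debian/rules fragment: the buildds call build-arch,
# which only builds the architecture-dependent parts.
build: build-arch build-indep

build-arch: build-arch-stamp
build-arch-stamp:
	$(MAKE) all
	touch $@

build-indep: build-indep-stamp
build-indep-stamp:
	$(MAKE) doc
	touch $@
```

With a split like this, an architecture-only build never touches the (often slow) documentation targets, which is where savings like the one above come from.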

1 April 2012

Raphaël Hertzog: My Debian Activities in March 2012

This is my monthly summary of my Debian related activities. If you're among the people who made a donation to support my work (227.83 EUR, thanks everybody!), then you can learn how I spent your money. Otherwise it's just an interesting status update on my various projects.

Dpkg

Thanks to Guillem, dpkg with multiarch support is now available in Debian sid. The road has been bumpy, and it was again delayed multiple times even after Guillem announced it on debian-devel-announce. Finally, the upload happened on March 19th. I did not appreciate his announcement because it was not coordinated at all; had I been involved from the start, we could have drafted it in a way that sounded less scary for people. In the end, I provided a script so that people can verify whether they were affected by one of the potential problems that Guillem pointed out. While real, most of them are rather unlikely for typical multiarch usage.

Bernhard R. Link submitted a patch to add a new status command to dpkg-buildflags. This command would print all the information required to understand which flags are activated and why. It would typically be called during the build process by debian/rules to keep a trace of the build flags configuration. The goal is to help debugging and also to make it possible to extract that information automatically from build logs. I reviewed his patch and we made several iterations; it's mostly ready to be merged, but there's one detail where Bernhard and I disagree, and I solicited Guillem's opinion to try to reach a decision. Unfortunately neither Guillem nor anyone else chimed in.

On request of Alexander Wirt, I uploaded a new backport of dpkg where I dropped the DEB_HOST_MULTIARCH variable from dpkg-architecture to ensure multi-arch is never accidentally enabled in other backports.
One last thing that I did not mention publicly at all yet is that I contacted Lennart Poettering to suggest an improvement to the /etc/os-release file that he's trying to standardize across distributions. It occurred to me that this file could also replace our /etc/dpkg/origins/default file (and not only /etc/debian_version) provided that it could store ancestry information. After some discussions, he documented new official fields for that file (ID_LIKE, HOME_URL, SUPPORT_URL, BUG_REPORT_URL). The next step for me is to improve dpkg-vendor to support this file (as a fallback or as default, I don't know yet).

Packaging

I packaged quilt 0.60 (we're now down to 9 Debian-specific patches, from a whopping 26 in version 0.48!) and zim 0.55. In anticipation of the next upstream version of Publican, I asked the Perl team to package a few Perl modules that Publican now requires. Less than two weeks later, all of them were in Debian Unstable. Congrats and many thanks to the Perl team (and Salvatore Bonaccorso in particular, whom I happen to know because we were on the same plane during last Debconf!).

On a side note, being the maintainer of nautilus-dropbox became progressively less fun over the last months, in particular because the upstream authors tried to override some of the (IMO correct) packaging decisions that I made and got in touch with Ubuntu community managers to try to have their way. Last but not least, I keep getting duplicates of a bug that is not in my package but in the official package, and Dropbox did not respond to my query about it.

Book update

The translation is finished and we're now reviewing the whole book. It takes a bit more time than expected because we're trying to harmonize the style and because it's difficult to coordinate the work of several volunteer reviewers. The book cover is now almost finalized (click on it to view it in higher definition). We also made some progress on the interior design for the paperback.
Unfortunately, I have nothing to show you yet. But it will be very nice and made with just a LaTeX stylesheet tailored for use with dblatex. The liberation fundraising slowed down with only 41 new supporters this month, but it made a nice bump anyway thanks to a generous donation of 1000 EUR by Offensive Security, the company behind BackTrack Linux. They will soon communicate on this; hopefully it will boost the operation. It would be really nice if we managed to raise the remaining 3000 EUR in the few weeks left until the official release of the book!

The work on my book dominated the month and explains my relative inactivity on other fronts. I worked much more than usual, and my wife keeps telling me that I look tired and that I should go to bed earlier, but I see the end of the tunnel: if everything goes well, the book should be released in a few weeks and I will be able to switch back to a saner lifestyle.

Thanks

See you next month for a new summary of my activities.
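The point of the dpkg-buildflags status command discussed above is that the flag configuration becomes greppable from build logs. A sketch of what that extraction could look like (the log lines below are invented for illustration; the exact output format of the real command may differ):

```shell
# Pretend build log containing status lines as emitted by debian/rules
# calling the (then proposed) dpkg-buildflags status command.
cat > fake-build.log <<'EOF'
dpkg-source: info: building foo in foo_1.0-1.dsc
dpkg-buildflags: status: environment variable DEB_BUILD_OPTIONS=noopt
dpkg-buildflags: status: CFLAGS = -g -O0
gcc -g -O0 -c foo.c
EOF

# Extract the build-flags configuration from the log:
grep '^dpkg-buildflags: status:' fake-build.log
```

Any log-analysis tool can then recover the active flags without re-running the build.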


28 December 2011

Bernhard R. Link: symbol files: With great power comes great responsibility

Symbol files are a nice little feature to reduce dependencies of packages. Before there were symbol files, libraries in Debian just had shlibs files (both are found in /var/lib/dpkg/info/). A shlibs file says, for each library, which packages to depend on when using this library. When a package is created, the build scripts will usually call dpkg-shlibdeps, which then looks at which libraries the programs in the package use and calculates the needed dependencies. This means the maintainers of most packages do not have to care what libraries to depend on, as it is automatically calculated. And as compiling and linking against a newer version of a library can cause a program to no longer work with an older library, it also means those dependencies are correct regardless of which version of a library is compiled against.

As shlibs files only have one dependency information per soname, they are also quite strict: if there is any possible program that would not work with an older version of a library, then the shlibs file must pull in a dependency for the newer version, so everything needing that library ends up depending on the newer version. As most libraries add new stuff most of the time, most library packages (except some notably extremely API-stable packages, like for example some X libs) just chose to automatically put the latest package version in the shlibs file. This of course caused library packages to be quite strict: almost every package depended on the latest version of all libraries, including libc, so practically no package from unstable or testing could be used in stable.

To fix these problems, symbols files were introduced. A symbols file is a file (also finally installed in /var/lib/dpkg/info/ alongside the shlibs file) giving a minimum version for each symbol found in the library. The idea is that different programs use different parts of a library.
Thus if new functionality is introduced, it would be nice to differentiate which functionality is used and compute dependencies based on that. As the only thing programmatically extractable from a binary file is the list of dynamic symbols used, this is the information used for that. But this means the maintainer of the library package now has not only one question to answer ("What is the minimal version of this library a program compiled against the current version will need?"), but many: "What is the minimal version of this library a program compiled against the current version and referencing this symbol name will need?"

Given the symbols file of the last version of a library package and the libraries in the new version of the package, there is one way to catch obvious mistakes: if a symbol was not in the old list but is in the current library, one needs at least the current version of the library. So if dpkg-gensymbols finds a missing symbol, it will add it with the current version. While this will never create dependencies that are too strict, it sadly can have the opposite effect of producing dependencies that are not strict enough. Consider for example some library exporting the following header file:
enum foo_action { foo_START, foo_STOP };
void do_foo_action(enum foo_action);
Which in the next version looks like that:
enum foo_action { foo_START, foo_STOP, foo_RESTART };
void do_foo_action(enum foo_action);
As the new enum value was added at the end, the numbers of the old constants did not change, so the API and ABI did not change incompatibly and a program compiled against the old version still works with the new one (that means: upstream did their job properly). But the maintainer of the Debian package faces a challenge: there was no new symbol added, so dpkg-gensymbols will not see that anything changed (as the symbols are the same). So if the maintainer forgets to manually increase the version required by the do_foo_action symbol, it will still be recorded in the symbols file as needing the old version. Thus dpkg will not complain if one tries to install the package containing the program together with the old version of the library. But if that program is called and calls do_foo_action with argument 2 (foo_RESTART), it will not behave properly. To recap:
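In a case like the above, the fix is for the maintainer to bump the minimal version of the affected symbol in the symbols file by hand. A minimal sketch, with made-up library, package and version names, in the format dpkg-gensymbols uses:

```
libfoo.so.1 libfoo1 #MINVER#
 do_foo_action@Base 2.0
 some_other_symbol@Base 1.0
```

Here do_foo_action was manually raised to 2.0, the (hypothetical) version that introduced foo_RESTART, even though dpkg-gensymbols itself reports no change between the two versions.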

25 November 2011

Bernhard R. Link: checking buildd logs for common issues

Being tired of feeling embarrassed when noticing some warning in a buildd log only after having uploaded the package and looked at the buildd logs of the other architectures, I've decided to write some little infrastructure to scan buildd logs for common issues. The result can be visited at https://buildd.debian.org/~brlink/. Note that it currently only has one real check (looking for "E: Package builds NAME_all.deb when binary-indep target is not called.") plus two little warnings (dh_clean -k and dh_installmanpages deprecation output) which lintian could catch just as well.

The large size of the many logs to scan is a surprisingly small problem. (As some tests indicated it would only take a couple of minutes for a full scan, I couldn't help running one, only to learn afterwards that the wb-team was doing the import of the new architectures at that time. Oops!) More surprising for me: using small files to keep track of logs already scanned does not scale at all with the large number of source packages. The file system overhead is gigantic and it makes the whole process needlessly IO-bound. That problem was easily solved by using sqlite to track what has been done; buildd.debian.org doesn't have that installed yet, though, so no automatic updates yet. [Update: already installed, will be some semi-automatic days first, though, anyway]

The next thing to do is writing more checks, where I hope for some help from you: what kind of diagnostics do you know from buildd logs that you would like to be more prominently visible (hopefully soon on packages.qa.debian.org, wishlist item already filed)? A trivial target is everything that can be identified by a regular expression applied to every line of the buildd log. For such cases the most complicated part is writing a short description of what the message means. (So if you send me some suggestions, I'd be very happy to also get a short text suitable for that, together with the message to look for and ideally some example package having that message in its buildd log.) I'm also considering some more complicated tests. I'd really like to have something to test for packages being built multiple times due to Makefile errors and stuff like that.
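A check of the "regular expression per line" kind is trivial to prototype in shell. A minimal sketch, assuming a log excerpt invented for this example (the real checks of course run over full buildd logs):

```shell
# Scan a build log for two of the deprecation diagnostics mentioned above.
# The log content here is made up for the sake of the example.
cat > buildlog.txt <<'EOF'
dh_clean -k is deprecated; use dh_prep instead
dh_installmanpages is a deprecated compatibility helper
gcc -O2 -c foo.c
EOF

# Each check is just a regex; -n prints the matching line numbers.
grep -n -E 'dh_clean -k is deprecated|dh_installmanpages' buildlog.txt
```

The hard part, as noted above, is not the matching but writing a short description of what each matched message means.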

1 November 2011

Bernhard R. Link: File ownership and permissions in Debian packages

As you will know, every file on a unixoid system has some meta-data like owner, group and permission bits. This is of course also true for files that are part of some Debian package. And it is not very surprising that different files should have different permissions, or even different owners or groups. Which file has which settings is of course for the package maintainer to decide, and Debian would not be Debian if there were no ways for the user to give their own preferences and have them preserved. This post is thus about how those settings end up in the package and what is to be observed when setting them. As you will also have heard, a .deb file is simply a tar archive stored as part of an ar archive, as you can verify by unpacking a package manually:
$ ar t reprepro_4.8.1-1_amd64.deb
debian-binary
control.tar.gz
data.tar.gz
$ ar p reprepro_4.8.1-1_amd64.deb data.tar.gz | gunzip | tar -tvvf -
drwxr-xr-x root/root         0 2011-10-10 12:05 ./
drwxr-xr-x root/root         0 2011-10-10 12:05 ./etc/
drwxr-xr-x root/root         0 2011-10-10 12:05 ./etc/bash_completion.d/
-rw-r--r-- root/root     19823 2011-10-10 12:05 ./etc/bash_completion.d/reprepro
drwxr-xr-x root/root         0 2011-10-10 12:05 ./usr/
drwxr-xr-x root/root         0 2011-10-10 12:05 ./usr/share/
--More--
(For unpacking stuff from scripts, you should of course use dpkg-deb --fsys-tarfile instead of the ar and gunzip calls above. The above example is about the format, not a recipe for unpacking files.) This already explains how the information is usually encoded in the package: a tar file contains that information for each contained file, and dpkg simply uses that information. (As tar stores numeric owner and group information, that limits group and owner information to users and groups with fixed numbers, i.e. 0-99. Other cases will be covered later.)

The question for the maintainer is now: where does the information which file has which owner/group/permissions in the .tar inside the .deb come from? The answer is simple: it's taken from the files to be put into the .deb. This means that package tools could simply be implemented by calling tar, and there is no imminent need to write your own tar generator. It also means that the maintainer has full control and does not have to learn new descriptive languages or tools to change permissions, but can simply put the usual shell commands into debian/rules.

There are some disadvantages, though: a normal user cannot change ownership of files, and one has to make sure all files have proper permissions and owners. This means that dpkg-deb -b (or the usually used wrapper dh_builddeb) must be run in some context where you could change the file ownership to root first. This means you either need to be root, or at least fake being root by using fakeroot. (While this could be considered an ugly workaround, it also means upstream's make install is run believing it is root, which also avoids some -- for a packager -- quite annoying automatisms in upstream build scripts that assume a package is not to be installed system-wide when not installed as root.) Another problem are random build host characteristics changing how files are created in the directory later given to dpkg-deb -b, for example an umask which might make all files non-world-readable by default.
The usual workaround is to first fix up all those permissions. Most packages use dh_fixperms for this, which also sets executable bits according to some simple rules and has some more special cases, so that the overall majority of packages does not need to look at permissions at all. So using some debhelper setup, every special permission and all owner/group information for owners and groups with fixed numbers only needs to be set using the normal command line tools between dh_fixperms and dh_builddeb. Everything else happens automatically. Note that games is a group with a fixed gid, so it is not necessary (and usually a bug) to change group-ownership of files within the package to group games in maintainer scripts (postinst, ...).

If a user wants to change permissions or ownership of a file, dpkg allows this using the dpkg-statoverride command. This command essentially manages a list of files to get special treatment, together with the ownership and permission information they should get. This way a user can specify that files should have different permissions, and this setting is applied whenever a new version of the file is installed by dpkg. Being a user setting especially means that packages (that means their maintainer scripts) should not usually use dpkg-statoverride. There are two exceptions, though: different permissions based on interaction with the user (e.g. asking questions with debconf) and dynamically allocated users/groups with dynamic ids. In both cases one should note that settings given to dpkg-statoverride are settings of the user, so the same care should be taken as with files in /etc; especially, one should never override something the user has set in there. (I can think of no example where calling dpkg-statoverride --add without dpkg-statoverride --list in some maintainer script is not a serious bug: either you override user settings or you are using debconf as a registry.)

Moral

To recap, your package is doing something wrong if:
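To make the check-before-add rule concrete: a maintainer script should only ever record an override after verifying that the admin has not already set one. A minimal sketch of that pattern (the package path, user, group and mode are made up for illustration; this is not a drop-in postinst):

```shell
# Add a dpkg-statoverride entry only if the administrator has not
# already recorded their own setting for this path.
add_override_if_unset() {
    user="$1"; group="$2"; mode="$3"; path="$4"
    if dpkg-statoverride --list "$path" >/dev/null 2>&1; then
        # An override already exists (set by the admin or an earlier
        # run); never replace it.
        return 0
    fi
    dpkg-statoverride --update --add "$user" "$group" "$mode" "$path"
}

# e.g. in a hypothetical postinst, for a game keeping shared highscores:
# add_override_if_unset root games 2755 /usr/lib/frobgame/frobgame
```

The --list guard is exactly what distinguishes respecting user settings from silently overriding them.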

1 August 2011

Bernhard R. Link: About feature branches for patch handling and reverting to old states.

Now that the debconf videos are available (big THANKS to the video team), I was able to watch the talk about packages in git at Debconf11 and wanted to share some insights:

5 April 2011

Bernhard R. Link: Paternalism and Freedom

As seen in some mailinglist discussion:
"It seems to be a common belief between some developers that users should have to read dozens of pages of documentation before attempting to do anything.
"I m happy that not all of us share this elitist view of software. I thought we were building the Universal Operating System, not the Operating System for bearded gurus."
I think this is an interesting quote, as it shows an important misunderstanding of what Debian is for many people. Debian (and Linux in general) was in its beginnings quite complicated and often not very easy to use. People still felt a deep relief to have it, and a strong love. Why? Because it's not so much about how you can use it, but how you can fix it. A system that only has a nice surface and hides everything below it, that in a majority of cases does just what you most likely want, is nice. But if the only options are "On", "Off" and perhaps some "something is not working as it should, try to fix it" (aka "Repair"), it is essentially a form of paternalism: there is your superior that decided what is good for you; you would not understand it anyway, just swallow what you get. Not very surprisingly, many people do not like being in the position of the inferior of a computer (the more stupid the computer, the less so, but even modern computers are still stupid enough for most people). So what those people want is not necessarily a system that can only be used after reading a dozen pages of documentation, but a system they know they can force to do what they want, even if that might then mean reading some pages of documentation. So good software in that sense might have some nice interface and some defaults working most of the time. But more importantly it has good documentation, internals simple enough that one can grasp them, and is transparent enough that one can understand why it is currently not working and what to do about it, allowing enough user interference to fix it.
If all I get offered is some "interface for users too stupid to understand it anyway" and all options to fix it are checking and unchecking all boxes and restarting a lot or perhaps some gracious offer of "There is the source code, just read that spaghetti code, you can there see anything it does though you might need to build a debug version just to see why it does not work" then I would not call any strong feelings against this situation "elitist".

21 January 2011

Cyril Brulebois: Debian XSF News #2

Time for a second Debian XSF News issue! I'll skip stuff related to the new katamari since I wrote about that already.

17 December 2010

Bernhard R. Link: C Code to avoid

One of the bad aspects of the C programming language is that it silently allows many bad C programs. Together with the widespread use of an architecture that is very bad at catching errors (i386), this sometimes leads to common idioms that only work accidentally. This is bad, as they often break on other architectures and can break with every new optimisation or feature the compiler adds. Take for example a look at the code there (I tried to leave a comment there but did not succeed): If you see something like this:
     char buffer[1000];
     struct thingy *header;
     header = (struct thingy *)buffer;
then it is time to run. I hope you do not depend on this software, because it is a pure accident if this is doing anything at all. While you can cast a char * to a struct pointer, that is only allowed if that memory actually was this struct (or one compatible, like a struct with the same initial part, and you are only accessing that part). In this case it is obviously not (it's just an array of char), so you might see bus errors or random values if the compiler does no optimisations and you are on an architecture where alignment matters. Or the compiler might optimize it to whatever it wants, because the C compiler is allowed to do anything with code like that. The next problem is the one that post was about: you are not allowed to access an array after its end. Something like
 struct thingy {
     int magic;
     char data[4];
 };
means you may only access the first 4 bytes of data. If you access more than that, it may work now on your machine, but it can stop working tomorrow with the next revision of the compiler or on another machine. If you have a struct with a variable-length data block, then you can use the new C99 feature of char data[] or the old gcc extension of char data[0]. Or you can use unions. (Or in some cases use the behaviour of structs with the same initial parts.) If you use C code with undefined semantics, then every new compiler might break it with some optimisation. There is often the tempting option of just using slightly different code that currently works. But in the not too distant future the compiler (or even some processor) might again get some new optimisations and the code will break again. Fixing it properly might be harder, but it is then less likely to break again, and it also reduces the chances that the code will not fail visibly but simply do something you did not expect.

6 October 2010

Bernhard R. Link: git-dpm 0.3.0

I've just uploaded git-dpm 0.3.0-1 packages to experimental. Apart from many bugfixes (I will also take a look at whether I can make a 0.2.1 version targeting squeeze, though the freeze requirements tend to get tighter and tighter, so I may already be too late), the biggest improvement is the newly added git-dpm dch command, which spawns dch and then extracts something to be used for the git commit message (I prefer to have more control over debian/changelog, so I prefer this way over the other direction).

15 August 2010

Bernhard R. Link: common inefficient shell code

There is hardly any use in:
cat filename | while ...
do
...
done
Just do:
while ...
do
...
done < filename
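A concrete (made-up) illustration of the redirection variant, summing numbers from a file without any cat:

```shell
# Sum the numbers in a file, reading it via redirection instead of cat.
# The file name and contents are invented for this example.
printf '1\n2\n3\n' > numbers.txt
total=0
while read -r n
do
    total=$((total + n))
done < numbers.txt
echo "$total"
```

Besides saving a process, the redirection form keeps the loop in the current shell in most shells, so variables set inside it (like total here) survive the loop; with `cat numbers.txt | while ...` the loop may run in a subshell and the final value would be lost.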
If you want the while run in a subshell, use some parentheses, but you do not need the cat at all. Another unnecessarily inefficient idiom often seen is
foo="$(echo "$bar" | sed -e 's/ .*//')"
which can be replaced with the less forky
foo="${bar%% *}"
Similarly there is
foo="${bar% *}"
as short and fast variant of
foo="$(echo "$bar" | sed -e 's/ [^ ]*$//')"
and the same with # instead of % for removing stuff from the beginning. (Note that both are POSIX, only the ${name/re/new} not discussed here is bash-specific.)
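A quick (invented) demonstration that the parameter expansions really match the sed calls:

```shell
# Compare sed output with the equivalent POSIX parameter expansions.
bar="alpha beta gamma"

with_sed_first="$(echo "$bar" | sed -e 's/ .*//')"      # keep first word
with_exp_first="${bar%% *}"

with_sed_last="$(echo "$bar" | sed -e 's/ [^ ]*$//')"   # drop last word
with_exp_last="${bar% *}"

echo "$with_sed_first / $with_exp_first"
echo "$with_sed_last / $with_exp_last"
```

Both pairs print the same result, but the expansions need no fork and no pipe.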

2 August 2010

Bernhard R. Link: git-dpm 0.2.0 with import-dsc

I've uploaded git-dpm version 0.2.0. The most notable change, and the one which could need some testing, is the new git-dpm import-dsc, which will import a .dsc file and try to import the patches found into git.

29 May 2010

Bernhard R. Link: Ghostscript brain-dead...

Some little warning to everyone using ghostscript: Ghostscript always looks for files in the current directory first, including a file that is always executed first (before any safe mode is activated). So by running ghostscript in a directory you do not control, you might execute arbitrary stuff. Two things make this worse: You have been warned.

19 February 2010

Bernhard R. Link: reprepro 4.1.0 and new Packages.diff generation

I've just released reprepro 4.1.0 and uploaded it to unstable. The most noteworthy change, and the one where I need your help, is that the included rredtool program can now generate Packages.diff/Index files when used as export-hook by reprepro. (Until now you had to use the included tiffany.py script, which is a fork of the script the official Debian archives use. That script is still included in case you prefer the old method.) So instead of
DscIndices: Sources Release . .gz tiffany.py
DebIndices: Packages Release . .gz tiffany.py
you can now use
DscIndices: Sources Release . .gz /usr/bin/rredtool
DebIndices: Packages Release . .gz /usr/bin/rredtool
to get the new diff generator. The new diff generator has an important difference to the old one: it merges patches, so every client should only need to download and apply a single patch, not multiple ones after each other, thus reducing the disadvantages of Packages.diff files a bit (and sometimes even reducing the amount of data to download considerably). While reprepro and apt-get (due to carefully working around bugs/shortcomings of older versions of apt) seem to work, I don't know if there are other users of those files that could be surprised by that. If you know any, I'd be glad if you could test them or tell me about them.

25 January 2010

Joachim Breitner: Serna XML editor uploaded to Debian

The XML editor Serna by Syntext was published as Free Software a few months ago. This was very good news, because there was a lack of good free XML editors with a good graphical view on DocBook documents, which I needed to recommend to users of zpub. Therefore, I investigated packaging Serna for Debian. I had to patch a few things to make it compile on amd64 and to use components shipped by Debian where possible. Today, I could finally close the RFP bug filed by W. Martin Borgert, as the serna package was accepted by the ftp-masters. The first bug (SEGFAULT on startup on lenny) is already filed. I hope this is a good sign, as it shows that there is interest in the package.

For my packaging workflow, I used git-svn to import the upstream SVN branch into a git repository. I then use git-dpm by Bernhard R. Link to manage my changes as patches in the new 3.0 (quilt) Debian source package format. I must say that I prefer this approach to git-buildpackage, as there is only one git branch to publish. I hope that Bernhard uploads git-dpm to Debian soon. Serna is quite a big software project and uses stuff that I know little about (Qt, C++ with Python interaction etc.). Also, the package currently bundles the DITA-OT package, which should rather be packaged separately. Therefore, I'd be glad if co-maintainers would join the effort.

9 January 2010

Bernhard R. Link: git-dpm now as alioth project

Git-dpm can now be found at http://git-dpm.alioth.debian.org/ and the source at git://git.debian.org/git/git-dpm/git-dpm.git. Functionality should now be mostly complete, so testers are really needed now.

3 January 2010

Bernhard R. Link: Alpha testers wanted

If you ever tried to determine what patches other distributions apply to some package you are interested in, you might have come to the same conclusion as I did: it is quite an impudence how those are presented. If you don't give up, you end up with programs or scripts to extract many proprietary source package formats, and more VCS systems installed than you think there should exist. That's when you start to love the concept that every Debian mirror has, next to each binary package, the source in a format from which you can extract the changes easily with only tools you find on every unixoid system. And that's why I love the new (though in my eyes quite misnamed) "3.0 (quilt)" format, because it makes this even clearer and easier. Sadly one problem remained: how to generate and store those patches? While you can just manage patches manually or use quilt to handle them and store the result in a VCS of your choice, the newfangled VCSes (especially git) became quite good at managing, moving and merging changes around, so it seems quite a waste not to be able to use this to handle those patches easily. While one can either use git to handle a patchset, by storing it as a chain of commits and using the interactive rebase, or use git to store the history of your package, doing both at the same time is tricky and not reasonably doable with git-provided porcelain. Thus I wrote my own tool to facilitate git for both tasks at the same time. The idea is to have three branches: a branch storing the history of your package, a branch storing your patches in a way suitable to submit them upstream or to create a debian/patches/ directory from, and a branch with the upstream contents. I have an implementation which seems to already work, though I am sure there is still much to improve and many errors and pitfalls still to find.
Thus if you also like to experiment with handling patches of a debian package in git, take a look at the manpage or the program at git://git.debian.org/~brlink/git-dpm.git
(WARNING: as stated above: alpha quality; also places are temporary and are likely to change in the future).

23 October 2009

Bernhard R. Link: I'll never understand why some people consider it acceptable to depend on udev

This is just a reminder for all of you that have packages that depend on the udev package: I hate you. A Debian package depending on the udev package (with very few exceptions like for example the initramfs-tools package that actually uses udev) is so wrong.

3 October 2009

Bernhard R. Link: An argument for symbol versioning

A little example of why it is nice to have symbol versioning in libraries. Save the following as test.sh. Call without arguments: segfault; call with argument "half": segfault; call with argument "both": works.
#!/bin/sh
cat >s1.h <<EOF
extern void test(int *);
#define DO(x) test(x)
EOF
cat >libs1.c <<EOF
#include <stdio.h>
#include "s1.h"
void test(int *a) {
	printf("%d\n", *a);
}
EOF
cat >libs1.map <<EOF
S_1 {
 global:
  test;
};
EOF
cat >s2.h <<EOF
extern void test(int);
#define DO(x) test(*x)
EOF
cat >libs2.c <<EOF
#include <stdio.h>
#include "s2.h"
void test(int a) {
	printf("%d\n", a);
}
EOF
cat >libs2.map <<EOF
S_2 {
 global:
  test;
};
EOF
cat >a.h <<EOF
void a(void);
EOF
cat >liba.c <<EOF
#include "s.h"
#include "a.h"
void a(void) {
	int b = 4;
	DO(&b);
}
EOF
cat > test.c <<EOF
#include "a.h"
#include "s.h"
int main() {
	int b = 3;
	DO(&b);
	a();
	return 0;
}
EOF
rm -f liba.so libs.so* test s.h
if test $# -le 0 || test "x$1" != "xboth" ; then
gcc -Wall -O2 -shared -o libs.so.1 -Wl,-soname,libs.so.1 libs1.c
else
gcc -Wall -O2 -shared -o libs.so.1 -Wl,-soname,libs.so.1 -Wl,-version-script libs1.map libs1.c
fi
if test $# -le 0 ; then
gcc -Wall -O2 -shared -o libs.so.2 -Wl,-soname,libs.so.2 libs2.c
else
gcc -Wall -O2 -shared -o libs.so.2 -Wl,-soname,libs.so.2 -Wl,-version-script libs2.map libs2.c
fi
ln -s libs.so.1 libs.so
ln -s s1.h s.h
gcc -Wall -O2 -shared -o liba.so -Wl,-soname,liba.so liba.c -L. -ls
gcc -Wall -O2 test.c -L. -ls -la -o test
rm libs.so s.h
ln -s libs.so.2 libs.so
ln -s s2.h s.h
gcc -Wall -O2 -shared -o liba.so -Wl,-soname,liba.so liba.c -L. -ls
LD_LIBRARY_PATH=. ./test

16 January 2009

Bernhard R. Link: Multiple filesystems for the paranoid

Given the current discussion on planet.debian.org about having only one or multiple file-systems, I just wanted to add a plea for having multiple filesystems. In my (perhaps a bit overly paranoid) eyes, having multiple filesystems is mainly a security measure. I prefer having enough partitions so that the following properties hold: Admittedly, those arguments may not be as convincing for a laptop as for a server. But I personally like to have paranoia enacted everywhere. Uniformness makes life much easier sometimes.

Update: If having "paranoid" in the title was not enough of a hint that I do not claim a system without these measures loses a significant amount of security compared to more important ones, let it be told to you now. It's all about thinking about even the little things and taking measures where they do not otherwise harm. To get the warm fuzzy feeling I got when e.g. CVE-2006-3626 was found and my computers already had nosuid set for /proc. ;->
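The /proc example translates to a single /etc/fstab line; a sketch (the option set is chosen for illustration, check what your system's tools expect before copying it):

```
proc  /proc  proc  defaults,nosuid,nodev,noexec  0  0
```

The same nosuid/nodev options are worth considering for any filesystem that regular users can write to, such as /tmp or /home.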
